    Binary Hypothesis Testing Game with Training Data

    We introduce a game-theoretic framework to study the hypothesis testing problem in the presence of an adversary aiming to prevent a correct decision. Specifically, the paper considers a scenario in which an analyst has to decide whether a test sequence has been drawn according to a probability mass function (pmf) P_X or not. In turn, the goal of the adversary is to take a sequence generated according to a different pmf and modify it in such a way as to induce a decision error. P_X is known only through one or more training sequences. We derive the asymptotic equilibrium of the game under the assumption that the analyst relies only on the first-order statistics of the test sequence, and compute the asymptotic payoff of the game when the length of the test sequence tends to infinity. We introduce the concept of the indistinguishability region as the set of pmfs that cannot be distinguished reliably from P_X in the presence of attacks. Two different scenarios are considered: in the first, the analyst and the adversary share the same training sequence; in the second, they rely on independent sequences. The obtained results are compared to a version of the game in which the pmf P_X is perfectly known to the analyst and the adversary.

    A new Backdoor Attack in CNNs by training set corruption without label poisoning

    Backdoor attacks against CNNs represent a new threat to deep learning systems, due to the possibility of corrupting the training set so as to induce an incorrect behaviour at test time. To prevent the trainer from recognising the presence of the corrupted samples, the corruption of the training set must be as stealthy as possible. Previous works have focused on the stealthiness of the perturbation injected into the training samples; however, they all assume that the labels of the corrupted samples are also poisoned. This greatly reduces the stealthiness of the attack, since samples whose content does not agree with the label can be identified by visual inspection of the training set or by running a pre-classification step. In this paper we present a new backdoor attack without label poisoning. Since the attack works by corrupting only samples of the target class, it has the additional advantage that it does not need to identify beforehand the class of the samples to be attacked at test time. Results obtained on the MNIST digit recognition task and the traffic sign classification task show that backdoor attacks without label poisoning are indeed possible, thus raising a new alarm regarding the use of deep learning in security-critical applications.
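    A minimal sketch of the kind of corruption described above: a trigger pattern is blended into a fraction of the training images of the target class, while the labels are left untouched. All names, the blending rule, and the parameter values are illustrative assumptions, not the paper's exact attack.

```python
import numpy as np

def poison_target_class(images, labels, target_class, trigger,
                        alpha=0.1, frac=0.3, seed=0):
    """Corrupt a fraction of the *target-class* training images by blending
    in a backdoor trigger; labels are never modified (no label poisoning)."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    idx = np.flatnonzero(labels == target_class)
    chosen = rng.choice(idx, size=int(frac * len(idx)), replace=False)
    # convex blend keeps the perturbation weak, hence hard to spot visually
    images[chosen] = np.clip((1 - alpha) * images[chosen] + alpha * trigger,
                             0.0, 1.0)
    return images, chosen
```

    Because only target-class samples are touched and their labels still agree with their content, the corrupted set passes a visual inspection or a pre-classification check.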

    An Improved Statistic for the Pooled Triangle Test against PRNU-Copy Attack

    We propose a new statistic to improve the pooled version of the triangle test used to combat the fingerprint-copy counter-forensic attack against PRNU-based camera identification [1]. As opposed to the original version of the test, the new statistic exploits the one-tailed nature of the test, weighting differently the positive and negative deviations from the expected value of the correlation between the image under analysis and the candidate images, i.e., those images suspected to have been used during the attack. The experimental results confirm the superior performance of the new test, especially when the conditions of the test are challenging, that is, when the number of images used for the fingerprint-copy attack is large and the size of the image under test is small. (Comment: submitted to IEEE Signal Processing Letters)
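    The asymmetric weighting idea can be sketched as follows; the weight values and the exact pooling rule are hypothetical placeholders, not the statistic derived in the paper.

```python
import numpy as np

def one_tailed_pooled_statistic(correlations, expected, w_pos=1.0, w_neg=0.2):
    """Pooled statistic that weights positive deviations of the measured
    correlations from their expected values more heavily than negative ones,
    reflecting the one-tailed nature of the test (weights are assumptions)."""
    dev = np.asarray(correlations) - np.asarray(expected)
    weights = np.where(dev > 0, w_pos, w_neg)
    return float(np.sum(weights * dev))
```

    In a one-tailed setting only deviations in one direction are evidence of the attack, so down-weighting the opposite tail reduces the noise contributed by uninvolved candidate images.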

    A Message Passing Approach for Decision Fusion in Adversarial Multi-Sensor Networks

    We consider a simple, yet widely studied, set-up in which a Fusion Center (FC) is asked to make a binary decision about a sequence of system states by relying on the possibly corrupted decisions provided by Byzantine nodes, i.e. nodes which deliberately alter the result of the local decision to induce an error at the fusion center. When independent states are considered, the optimum fusion rule over a batch of observations has already been derived; however, its complexity prevents its use in conjunction with large observation windows. In this paper, we propose a near-optimal algorithm based on message passing that greatly reduces the computational burden of the optimum fusion rule. In addition, the proposed algorithm retains very good performance also in the case of dependent system states. By first focusing on the case of small observation windows, we use numerical simulations to show that the proposed scheme introduces a negligible increase in the decision error probability compared to the optimum fusion rule. We then analyse the performance of the new scheme when the FC makes its decision by relying on long observation windows. We do so by considering both independent and Markovian system states, and show that the obtained performance is superior to that of prior suboptimal schemes. As an additional result, we confirm the previous finding that, in some cases, it is preferable for the Byzantine nodes to minimise the mutual information between the sequence of system states and the reports submitted to the FC, rather than always flipping the local decision.
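    For Markovian system states, sum-product message passing on the state chain reduces to a forward-backward recursion. The sketch below fuses the reports of N nodes per time step under a single known flip probability (lumping local errors and Byzantine flipping together); this is a simplified stand-in for the paper's algorithm, and all names and the observation model are assumptions.

```python
import numpy as np

def forward_backward(reports, trans, p_flip):
    """Posterior P(state=1 | all reports) for a binary Markov state chain.
    reports: (T, N) array of 0/1 node reports; trans: 2x2 transition matrix;
    p_flip: probability a report disagrees with the true state (assumed known)."""
    T, N = reports.shape
    ones = reports.sum(axis=1)           # number of 1-reports per step
    ll = np.empty((T, 2))
    for s in (0, 1):
        p1 = p_flip if s == 0 else 1 - p_flip   # P(report=1 | state=s)
        ll[:, s] = ones * np.log(p1) + (N - ones) * np.log(1 - p1)
    lik = np.exp(ll - ll.max(axis=1, keepdims=True))  # rescaled likelihoods
    # forward messages
    alpha = np.empty((T, 2))
    alpha[0] = 0.5 * lik[0]                      # uniform prior on the state
    for t in range(1, T):
        alpha[t] = lik[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    # backward messages
    beta = np.ones((T, 2))
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return (post / post.sum(axis=1, keepdims=True))[:, 1]
```

    The cost is linear in the window length, which is what makes long observation windows tractable compared to the exhaustive optimum rule.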

    A Game-Theoretic Framework for Optimum Decision Fusion in the Presence of Byzantines

    Optimum decision fusion in the presence of malicious nodes, often referred to as Byzantines, is hindered by the necessity of exactly knowing the statistical behavior of the Byzantines. By focusing on a simple, yet widely studied, set-up in which a Fusion Center (FC) is asked to make a binary decision about a sequence of system states by relying on the possibly corrupted decisions provided by local nodes, we propose a game-theoretic framework which makes it possible to exploit the superior performance provided by optimum decision fusion, while limiting the amount of a priori knowledge required. We first derive the optimum decision strategy by assuming that the statistical behavior of the Byzantines is known. Then we relax this assumption by casting the problem into a game-theoretic framework in which the FC tries to guess the behavior of the Byzantines, which, in turn, must fix their corruption strategy without knowing the guess made by the FC. We use numerical simulations to derive the equilibrium of the game, thus identifying the optimum behavior for both the FC and the Byzantines, and to evaluate the achievable performance at the equilibrium. We analyze several different set-ups, showing that in all cases the proposed solution improves the accuracy of data fusion. We also show that, in some instances, it is preferable for the Byzantines to minimize the mutual information between the status of the observed system and the reports submitted to the FC, rather than always flipping the decision made by the local nodes, as is customarily assumed in previous works.
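    Equilibria of finite zero-sum games like the FC-versus-Byzantines game can be approximated numerically; one standard technique (not necessarily the one used in the paper) is fictitious play, where each player repeatedly best-responds to the opponent's empirical strategy. The payoff matrix and names below are illustrative.

```python
import numpy as np

def fictitious_play(payoff, iters=20000):
    """Approximate a mixed-strategy equilibrium of a zero-sum matrix game.
    payoff[i, j] = row player's payoff (e.g. the FC) when the row player
    plays i and the column player (e.g. the Byzantines) plays j."""
    m, n = payoff.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[0] = col_counts[0] = 1    # arbitrary initial pure strategies
    for _ in range(iters):
        # row player maximises against the column player's empirical mix
        row_counts[np.argmax(payoff @ col_counts)] += 1
        # column player minimises against the row player's empirical mix
        col_counts[np.argmin(row_counts @ payoff)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()
```

    In zero-sum games the empirical strategy frequencies of fictitious play converge to an equilibrium, so this gives the same kind of numerically derived equilibrium the abstract mentions.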

    Attacking and Defending Printer Source Attribution Classifiers in the Physical Domain

    The security of machine learning classifiers has received increasing attention in recent years. In forensic applications, guaranteeing the security of the tools investigators rely on is crucial, since the gathered evidence may be used to decide on the innocence or the guilt of a suspect. Several adversarial attacks have been proposed to assess such security, with a few works focusing on transferring such attacks from the digital to the physical domain. In this work, we focus on physical-domain attacks against source attribution of printed documents. We first show how a simple reprinting attack may be sufficient to fool a model trained on images that were printed and scanned only once. Then, we propose a hardened version of the classifier trained on the reprinted attacked images. Finally, we attack the hardened classifier with several attacks, including a new attack based on the Expectation Over Transformation approach, which finds the adversarial perturbations by simulating the physical transformations occurring when the image attacked in the digital domain is printed again. The results demonstrate a good capability of the hardened classifier to resist attacks carried out in the physical domain.
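    The Expectation Over Transformation idea is to average the loss gradient over a set of simulated physical transformations before each attack step. The sketch below shows that loop for transformations with (assumed) identity Jacobians, such as additive distortions; `loss_grad`, `transforms`, and all parameters are hypothetical stand-ins for the print-and-scan simulation the paper uses.

```python
import numpy as np

def eot_gradient(x, y, transforms, loss_grad):
    """Average the adversarial-loss gradient over random transformations
    (assumes the transformations have identity Jacobians)."""
    grads = [loss_grad(t(x), y) for t in transforms]
    return np.mean(grads, axis=0)

def eot_attack(x, y, transforms, loss_grad, eps=0.1, steps=40, lr=0.01):
    """Sign-gradient ascent on the expected loss, projected onto an
    L_inf ball of radius eps around the original image (a minimal sketch)."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + lr * np.sign(eot_gradient(x_adv, y, transforms, loss_grad))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay close to the original
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay a valid image
    return x_adv
```

    Averaging over transformations is what makes the perturbation survive being printed again, rather than only working on the exact digital pixels.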

    Compressive Hyperspectral Imaging Using Progressive Total Variation

    Compressed Sensing (CS) is suitable for remote acquisition of hyperspectral images for Earth observation, since it can exploit the strong spatial and spectral correlations, allowing the architecture of the onboard sensors to be simplified. Solutions proposed so far tend to decouple the spatial and spectral dimensions to reduce the complexity of the reconstruction, not taking into account that onboard sensors progressively acquire spectral rows rather than spectral channels. For this reason, we propose a novel progressive CS architecture based on separate sensing of spectral rows and joint reconstruction employing Total Variation. Experimental results on raw AVIRIS and AIRS images confirm the validity of the proposed system. (Comment: to be published in the ICASSP 2014 proceedings)
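    The core reconstruction step, recovering a signal from fewer measurements than unknowns by penalising Total Variation, can be sketched in one dimension as below. The smoothed TV term, the plain gradient descent solver, and all parameter values are simplifying assumptions; the paper's joint multi-row reconstruction is more elaborate.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed 1-D Total Variation term sum_i |x[i+1] - x[i]|."""
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def cs_tv_reconstruct(A, y, lam=0.01, lr=0.1, steps=2000):
    """Recover x from compressive measurements y = A @ x by gradient descent
    on  ||A x - y||^2 / 2  +  lam * TV(x)   (a minimal sketch)."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * (A.T @ (A @ x - y) + lam * tv_grad(x))
    return x
```

    TV favours piecewise-constant solutions, which is why far fewer random measurements than samples suffice when the underlying signal has few discontinuities.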